A neural processing unit (NPU), also known as an AI accelerator or deep learning processor, is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence (AI) and machine learning applications, including artificial neural networks and computer vision.
Use
Their purpose is either to efficiently execute already trained AI models (inference) or to train AI models. Their applications include algorithms for robotics, Internet of things, and data-intensive or sensor-driven tasks.

[Google using its own AI accelerators.]

They are often manycore or spatial designs and focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. One widely used datacenter-grade AI integrated circuit chip, the Nvidia H100 GPU, has a transistor count of roughly 80 billion.
Consumer devices
AI accelerators are used in mobile devices such as Apple iPhones and Huawei and Google Pixel smartphones, in the AMD AI Engine in Versal devices and NPUs, and in many Apple silicon, Qualcomm, Samsung, and Google Tensor smartphone processors.
More recently (circa 2022), NPUs have been added to computer processors from Intel, AMD, and Apple silicon. All models of Intel Meteor Lake processors have a built-in versatile processing unit (VPU) for accelerating inference for computer vision and deep learning.
On consumer devices, the NPU is intended to be small and power-efficient, yet reasonably fast when used to run small models. To this end, NPUs are designed to support low-bitwidth operations using data types such as INT4, INT8, minifloat, and FP16. A common performance metric is trillions of operations per second (TOPS), though this metric alone does not specify which kind of operations are being performed.
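As an illustration of the kind of low-bitwidth arithmetic these units target, the sketch below quantizes FP32 matrices to INT8 with a symmetric per-tensor scale, performs the multiply-accumulate in integers, and dequantizes the result. The matrices, sizes, and scale choice are arbitrary, and NumPy stands in for the NPU's integer matrix units.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantization: map FP32 values onto INT8."""
    scale = np.max(np.abs(x)) / 127.0                     # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

# Arbitrary example weights and activations in FP32.
w = np.random.randn(64, 64).astype(np.float32)
a = np.random.randn(64, 64).astype(np.float32)

qw, sw = quantize_int8(w)
qa, sa = quantize_int8(a)

# Multiply-accumulate in integers with a wide (INT32) accumulator,
# then dequantize the result back to floating point using the two scales.
acc = qw.astype(np.int32) @ qa.astype(np.int32)
approx = acc.astype(np.float32) * (sw * sa)

print(np.max(np.abs(approx - w @ a)))  # quantization error vs. the FP32 reference
```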
Datacenters
[Google Tensor Processing Unit (TPU) v4 package (ASIC in center plus 4 HBM stacks) and printed circuit board (PCB) with 4 liquid-cooled packages; the board's front panel has 4 top-side PCIe connectors (2023).]

Accelerators are used in cloud computing servers, e.g., tensor processing units (TPUs) for Google Cloud Platform and the Trainium and Inferentia chips for Amazon Web Services.
Many vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.
Since the late 2010s, graphics processing units designed by companies such as Nvidia and AMD have often included AI-specific hardware in the form of dedicated functional units for low-precision matrix-multiplication operations. These GPUs are commonly used as AI accelerators, both for training and for inference.
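As a sketch of how these dedicated units are exercised from software (assuming PyTorch and a CUDA-capable GPU with matrix units are available), a half-precision matrix multiplication such as the one below is dispatched to them by the underlying libraries:

```python
import torch

# Half-precision operands; the sizes are arbitrary.
a = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
b = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)

# On GPUs with dedicated matrix units, this matmul is routed to them
# by the vendor BLAS library; accumulation typically uses a wider format.
c = a @ b
```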
Scientific computation
Although NPUs are tailored for low-precision (e.g. FP16, INT8) matrix multiplication, they can be used to emulate higher-precision matrix multiplication in scientific computing. Because modern GPUs devote much of their design effort to making these low-precision units fast, emulated FP64 (via the Ozaki scheme) can potentially outperform native FP64: this has been demonstrated using FP16-emulated FP64 on the Nvidia TITAN RTX and using INT8-emulated FP64 on Nvidia consumer GPUs and the A100 GPU. (Consumer GPUs benefit especially from this scheme, as they have little native FP64 hardware capacity, and have shown a 6× speedup.)
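A simplified sketch of the splitting idea behind such emulation is shown below: each FP64 operand is decomposed into a sum of FP16 slices, and the slice-by-slice products (which on real hardware would run on the FP16 matrix units with wider accumulation) are summed in FP64. This omits the exponent-alignment step that makes the actual Ozaki scheme's partial products exact, so it is illustrative rather than a faithful implementation.

```python
import numpy as np

def split_fp16(x, k=3):
    """Decompose an FP64 matrix into k FP16 slices whose sum approximates it."""
    slices, rem = [], x.copy()
    for _ in range(k):
        s = rem.astype(np.float16)
        slices.append(s)
        rem = rem - s.astype(np.float64)    # residual carried to the next slice
    return slices

def emulated_matmul(a, b, k=3):
    """Approximate an FP64 matmul by summing pairwise products of FP16 slices."""
    a_s, b_s = split_fp16(a, k), split_fp16(b, k)
    c = np.zeros((a.shape[0], b.shape[1]))
    for i, ai in enumerate(a_s):
        for j, bj in enumerate(b_s):
            if i + j < k:                   # drop the smallest cross terms
                # On an NPU this product would run in FP16 with wide accumulation;
                # NumPy's FP64 matmul stands in for that here.
                c += ai.astype(np.float64) @ bj.astype(np.float64)
    return c

a = np.random.rand(256, 256)
b = np.random.rand(256, 256)
print(np.max(np.abs(emulated_matmul(a, b) - a @ b)))  # approximation error
```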
Since CUDA Toolkit 13.0 Update 2, cuBLAS automatically uses INT8-emulated FP64 matrix multiplication at equivalent precision when it is faster than native FP64. This is in addition to the FP16-emulated FP32 feature introduced in version 12.9.
Programming
An operating system or a higher-level library may provide application programming interfaces such as TensorFlow Lite with LiteRT Next (Android) or Core ML (iOS, macOS). Formats such as ONNX are used to represent trained neural networks.
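For example, a network exported to ONNX can be executed through a runtime such as ONNX Runtime, which dispatches the graph to whichever execution provider (CPU, GPU, or NPU) is available. The model file name and input shape below are placeholders.

```python
import numpy as np
import onnxruntime as ort

# "model.onnx" is a placeholder for an exported network.
session = ort.InferenceSession("model.onnx")

# Query the graph's first input and feed it a dummy image-shaped tensor.
input_name = session.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: x})
```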
Consumer CPU-integrated NPUs are accessible through vendor-specific APIs. AMD (Ryzen AI), Intel (OpenVINO), Apple silicon (CoreML), and Qualcomm (SNPE) each have their own APIs, which can be built upon by a higher-level library.
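As one example of such a vendor path, the sketch below compiles a network for an NPU through OpenVINO; it assumes a recent OpenVINO install, a machine whose NPU is exposed as the "NPU" device, and uses a placeholder model file.

```python
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)              # e.g. ['CPU', 'GPU', 'NPU'] on a recent laptop

model = core.read_model("model.onnx")      # placeholder for an exported network
compiled = core.compile_model(model, device_name="NPU")

# Dummy input shaped like the model's first input.
x = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([x])[compiled.output(0)]
```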
GPUs generally use existing GPGPU pipelines such as CUDA and OpenCL adapted for lower precisions and specialized matrix-multiplication operations. Vulkan is also being used. Custom-built systems such as the Google TPU use private interfaces.
There are a large number of separate underlying acceleration APIs and compilers/runtimes in use in the AI field, greatly increasing software development effort due to the many combinations involved. As of 2025, the open standards organization Khronos Group is pursuing standardization of AI-related interfaces to reduce the amount of work needed. Khronos is working on three separate fronts: expanding the data types and intrinsic operations in OpenCL and Vulkan, adding compute graphs to SPIR-V, and defining an NNEF/SkriptND file format for describing a neural network.